52 research outputs found

    Indigenous family violence : an attempt to understand the problems and inform appropriate and effective responses to criminal justice system intervention

    Whilst high levels of concern about the prevalence of family violence within Indigenous communities have long been expressed, progress in the development of evidence-based intervention programs for known perpetrators has been slow. This review of the literature aims to provide a resource for practitioners who work in this area, and a framework from within which culturally specific violence prevention programs can be developed and delivered. It is suggested that effective responses to Indigenous family violence need to be grounded in culturally informed models of violence, and that significant work is needed to develop interventions that successfully manage the risk of perpetrators of family violence committing further offences.

    Understanding Gender Inequality in Poverty and Social Exclusion through a Psychological Lens: Scarcities, Stereotypes and Suggestions


    Artificial Driving Intelligence and Moral Agency: Examining the Decision Ontology of Unavoidable Road Traffic Accidents through the Prism of the Trolley Dilemma

    The question of the capacity of artificial intelligence to make moral decisions has been a key focus of investigation in robotics for decades. This question has now become pertinent to automated vehicle technologies, as a question of understanding the capacity of artificial driving intelligence to respond to unavoidable road traffic accidents. Artificial driving intelligence will make a calculated decision that could equate to deciding who lives and who dies. In calculating such important decisions, does the driving intelligence require moral intelligence and a capacity to make informed moral decisions? Artificial driving intelligence will be determined, at the very least, by state laws, driving codes, and codes of conduct relating to driving behaviour and safety. Does it also need to be informed by ethical theories, human values, and human rights frameworks? If so, how can this be achieved, and how can we ensure there are no moral biases in the moral decision-making algorithms? The question of moral capacity is complex and has become the ethical focal point of this technology. Research has centred on applying Philippa Foot’s famous trolley dilemma. We claim that before applications attempt to focus on moral theories, there is a necessary precedent to utilise the trolley dilemma as an ontological experiment. The trolley dilemma is succinct in identifying important ontological differences between human driving intelligence and artificial driving intelligence. In this paper, we argue that when the trolley dilemma is focused on ontology, it has the potential to become an important elucidatory tool. It can act as a prism through which one can perceive different ontological aspects of driving intelligence and assess response decisions to unavoidable road traffic accidents. The identification of the ontological differences is integral to understanding the underlying variances that support human and artificial driving decisions. Ontologically differentiating between these two contexts allows for a more complete interrogation of the moral decision-making capacity of the artificial driving intelligence.

    Autonomous vehicles and embedded artificial intelligence: The challenges of framing machine driving decisions

    With the advent of autonomous vehicles, society will need to confront a new set of risks which, for the first time, includes the ability of socially embedded forms of artificial intelligence to make complex risk mitigation decisions: decisions that will ultimately engender tangible life and death consequences. Since AI decisionality is inherently different from human decision-making processes, questions are raised regarding how AI weighs decisions, how we are to mediate these decisions, and what such decisions mean in relation to others. Therefore, society, policy, and end-users need to fully understand such differences. While AI decisions can be contextualised to specific meanings, significant challenges remain in terms of the technology of AI decisionality, the conceptualisation of AI decisions, and the extent to which various actors understand them. This is particularly acute in terms of analysing the benefits and risks of AI decisions. Due to the potential safety benefits, autonomous vehicles are often presented as significant risk mitigation technologies. There is also a need to understand the potential new risks which autonomous vehicle driving decisions may present. Such new risks are framed as decisional limitations, in that artificial driving intelligence will lack certain decisional capacities. This is most evident in the inability to annotate and categorise the driving environment in terms of human values and moral understanding. In both cases there is a need to scrutinise how autonomous vehicle decisional capacity is conceptually framed and how this, in turn, impacts a wider grasp of the technology in terms of risks and benefits. This paper interrogates the significant shortcomings in the current framing of the debate, both in terms of safety discussions and in consideration of AI as a moral actor, and offers a number of ways forward.

    Creating ethics guidelines for artificial intelligence and big data analytics customers: The case of the consumer European insurance market

    The research aims to provide a deep dive into the increasing ethical questions and contexts relating to the insurance industry’s sophisticated use of big data analytics, AI, and machine learning to sustain and develop its products and services. While the commercial opportunities are clear, there are questions of what costs the commercial benefits have in terms of how data are used, and whether that use leads to further tensions regarding privacy, surveillance, and profiling.

    Ethical AI for automated bus lane enforcement

    There is an explosion of camera surveillance in our cities today. As a result, the risks of privacy infringement and erosion are growing, as is the need for ethical solutions to minimise those risks. This research aims to frame the challenges and ethics of using data surveillance technologies in a qualitative social context. A use case is presented which examines the ethical data required to automatically enforce bus lanes using camera surveillance, and proposes ways of minimising the risks of privacy infringement and erosion in that scenario. What we seek to illustrate is that there is a challenge in using technologies in positive, socially responsible ways. To do that, we have to better understand the use case, and not just the present risks but also the downstream risks and the downstream ethical questions. There is a gap in the literature on this aspect, as well as a gap in how researchers understand and respond to it. A literature review and detailed risk analysis of automated bus lane enforcement is conducted. Based on this, an ethical design framework is proposed and applied to the use case. Several potential solutions are created and described. The final chosen solution may also be broadly applicable to other use cases. We show how it is possible to provide an ethical AI solution for detecting infringements that incorporates privacy-by-design principles, while being fair to potential transgressors. By introducing positive, pragmatic and adaptable methods to support and uphold privacy, we support access to innovation that can help us mitigate current emerging risks.
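    The privacy-by-design idea described in this abstract can be illustrated with a minimal sketch: pseudonymise every detected plate with a keyed hash at the point of capture, and retain identifying data only for vehicles actually flagged as infringing. All names, parameters, and the enforcement logic below are illustrative assumptions, not the paper's actual system.

```python
import hashlib
import hmac
import os

# Hypothetical sketch: a keyed hash (HMAC) pseudonymises plates so detections
# can be correlated without storing raw identities; only infringements retain
# the plate for enforcement follow-up. Illustrative only, not from the paper.
SECRET_KEY = os.urandom(32)  # would be rotated and access-controlled in practice

def pseudonymise(plate: str) -> str:
    """Keyed hash of the plate; unlinkable without the secret key."""
    return hmac.new(SECRET_KEY, plate.encode(), hashlib.sha256).hexdigest()

def process_detection(plate: str, in_bus_lane: bool, is_authorised: bool) -> dict:
    record = {"vehicle": pseudonymise(plate), "in_bus_lane": in_bus_lane}
    if in_bus_lane and not is_authorised:
        # Only a confirmed infringement keeps identifying data.
        record["plate"] = plate
        record["infringement"] = True
    else:
        record["infringement"] = False
    return record

compliant = process_detection("191-D-12345", in_bus_lane=True, is_authorised=True)
offender = process_detection("191-D-54321", in_bus_lane=True, is_authorised=False)
```

    The design choice here is data minimisation: compliant vehicles never enter storage in identifiable form, which narrows the downstream surveillance risk the abstract highlights.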

    Explainable Artificial Intelligence (XAI) in Insurance

    Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, given the industry’s vast stores of sensitive data on policyholders and its centrality in societal progress and innovation. This paper analyses current Artificial Intelligence (AI) applications in insurance industry practices and insurance research to assess their degree of explainability. Using search terms representative of (X)AI applications in insurance, 419 original research articles were screened from IEEE Xplore, ACM Digital Library, Scopus, Web of Science, Business Source Complete and EconLit. The resulting 103 articles (published between 2000 and 2021), representing the current state of the art of XAI in the insurance literature, are analysed and classified, highlighting the prevalence of XAI methods at the various stages of the insurance value chain. The study finds that XAI methods are particularly prevalent in claims management, underwriting and actuarial pricing practices. Simplification methods, namely knowledge distillation and rule extraction, are identified as the primary XAI technique used within the insurance value chain. This is important, as distilling large models into a smaller, more manageable model with distinct association rules aids in building XAI models that are readily understandable. XAI is an important evolution of AI to ensure trust, transparency and moral values are embedded within the system’s ecosystem. The assessment of these XAI foci in the context of the insurance industry proves a worthwhile exploration into the unique advantages of XAI, highlighting to industry professionals, regulators and XAI developers where particular focus should be directed in the further development of XAI. This is the first study to analyse XAI’s current applications within the insurance industry, while simultaneously contributing to the interdisciplinary understanding of applied XAI. Advancing the literature on adequate XAI definitions, the authors propose an adapted definition of XAI informed by the systematic review of XAI literature in insurance.
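    The simplification methods named in this abstract can be sketched in miniature: knowledge distillation probes an opaque model and fits a small, human-readable surrogate to its outputs. The "black-box" pricing model, the synthetic probe grid, and the one-variable rule below are all invented for illustration; they are not from the reviewed literature.

```python
# Illustrative sketch of knowledge distillation as an XAI simplification
# method: a simple, readable rule is fitted to mimic an opaque model.
# The black-box model and data are synthetic assumptions.

def black_box_price_loading(age: float, claims_history: int) -> int:
    """Stand-in for an opaque pricing model: 1 = high-risk loading applied."""
    score = 0.04 * age + 0.9 * claims_history
    return 1 if score > 2.5 else 0

# Probe the black box on a grid of inputs to build a distillation dataset.
probes = [(age, claims) for age in range(18, 80) for claims in range(0, 5)]
labels = [black_box_price_loading(a, c) for a, c in probes]

# Distil into a one-variable rule, "high risk iff claims_history >= t",
# choosing the threshold t that best reproduces the black box's decisions.
def rule_accuracy(t: int) -> float:
    preds = [1 if c >= t else 0 for _, c in probes]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

best_t = max(range(0, 6), key=rule_accuracy)
```

    The resulting rule sacrifices some fidelity to the original model but is fully auditable, which is the trade-off the abstract attributes to simplification methods in the insurance value chain.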

    Connected and autonomous vehicle injury loss events: potential risk and actuarial considerations for primary insurers

    The introduction of connected and autonomous vehicles (CAVs) to the road transport ecosystem will change the manner of collisions. CAVs are expected to optimize the safety of road users and the wider environment, while alleviating traffic congestion and maximizing occupant comfort. The net result is a reduction in the frequency of motor vehicle collisions, and a reduction in the number of injuries currently seen as “preventable.” A changing risk ecosystem will introduce new challenges and opportunities for primary insurers. Prior studies have highlighted the economic benefit provided by reductions in the frequency of hazardous events. This economic benefit, however, will be offset by the economic detriment incurred by emerging risks and the increased scrutiny placed on existing risks. We posit four plausible scenarios detailing how the introduction of these technologies could result in a larger relative rate of injury claims currently characterized as tail-risk events. In such a scenario, the culmination of these losses will present as a second “hump” in actuarial loss models. We discuss how CAV risk factors and traffic dynamics may combine to make a second “hump” a plausible reality, and discuss a number of opportunities that may arise for primary insurers from a changing road environment.
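    The "second hump" described here is, in effect, a claim-severity distribution becoming bimodal: most claims stay in a routine regime while a minority shift into a far more severe one. The mixture below is a hypothetical illustration with invented parameter values, not the paper's actuarial model.

```python
import math

# Hypothetical illustration of the "second hump": if a small share of claims
# moves into a much more severe regime, the severity density becomes bimodal.
# All parameter values (means, weights, units) are illustrative assumptions.

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def severity_density(x: float) -> float:
    # 90% routine claims centred around 10 (e.g. EUR thousands),
    # 10% tail-risk claims centred around 60.
    return 0.9 * normal_pdf(x, 10.0, 3.0) + 0.1 * normal_pdf(x, 60.0, 8.0)

# Count local maxima of the mixture on a grid: two maxima = two "humps".
xs = [i * 0.5 for i in range(0, 201)]  # severity grid from 0 to 100
ys = [severity_density(x) for x in xs]
humps = sum(1 for i in range(1, len(ys) - 1) if ys[i] > ys[i - 1] and ys[i] > ys[i + 1])
```

    A unimodal severity assumption fitted to such data would understate tail exposure, which is why the abstract frames the second hump as an actuarial consideration rather than a curiosity.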